Sharp Oracle Inequalities for Square Root Regularization

Authors

  • Benjamin Stucky
  • Sara A. van de Geer
Abstract

We study a set of regularization methods for high-dimensional linear regression models. These penalized estimators use the square root of the residual sum of squared errors as loss function and any weakly decomposable norm as penalty function. This fit measure is chosen because the resulting estimator does not depend on the unknown standard deviation of the noise. A generalized weakly decomposable norm penalty, in turn, is very useful for dealing with different underlying sparsity structures: we can choose a different sparsity-inducing norm depending on how we want to interpret the unknown parameter vector β. Structured sparsity norms, as defined in Micchelli et al. (2010), are special cases of weakly decomposable norms; the framework therefore also covers the square root LASSO (Belloni et al., 2011), the group square root LASSO (Bunea et al., 2014) and a new method we call the square root SLOPE (in a similar fashion to the SLOPE of Bogdan et al., 2015). For this collection of estimators, our results provide sharp oracle inequalities together with the Karush-Kuhn-Tucker conditions. We discuss some examples of estimators, and based on a simulation we illustrate some advantages of the square root SLOPE.
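
The estimator in question minimizes the square root of the mean residual sum of squares plus a tuning parameter times a sparsity-inducing norm. Below is a minimal sketch, not the authors' code, of the simplest member of this family, the square root LASSO, written with cvxpy; the helper name sqrt_lasso, the default solver and the tuning constant are illustrative assumptions.

```python
# Minimal square-root LASSO sketch (illustrative, not the paper's implementation):
#   beta_hat = argmin_beta  ||y - X beta||_2 / sqrt(n) + lam * ||beta||_1
# Any weakly decomposable norm could replace the l1 penalty below.
import numpy as np
import cvxpy as cp

def sqrt_lasso(X, y, lam):  # hypothetical helper name
    n, p = X.shape
    beta = cp.Variable(p)
    loss = cp.norm(y - X @ beta, 2) / np.sqrt(n)   # square root of the mean RSS
    penalty = lam * cp.norm(beta, 1)               # l1 penalty (square root LASSO)
    cp.Problem(cp.Minimize(loss + penalty)).solve()
    return beta.value

# Toy data with a sparse ground truth.
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:s] = 1.0
y = X @ beta0 + 0.5 * rng.standard_normal(n)

# Illustrative tuning choice of order sqrt(log(p)/n); note it uses no estimate of sigma.
beta_hat = sqrt_lasso(X, y, lam=1.1 * np.sqrt(2 * np.log(p) / n))
```

The point of this loss, as the abstract notes, is that a suitable choice of the tuning parameter does not depend on the unknown standard deviation of the noise.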


Similar articles

An inexact alternating direction method with SQP regularization for the structured variational inequalities

In this paper, we propose an inexact alternating direction method with square quadratic proximal (SQP) regularization for the structured variational inequalities. The predictor is obtained by solving the SQP system approximately under a significantly relaxed accuracy criterion, and the new iterate is computed directly by an explicit formula derived from the original SQP method. Under appropriat...


Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions

We obtain estimation error rates and sharp oracle inequalities for regularization procedures of the form f̂ ∈ argmin_{f ∈ F} ( (1/N) ∑_{i=1}^{N} ℓ(f(X_i), Y_i) + λ ‖f‖ ) when ‖·‖ is any norm, F is a convex class of functions and ℓ is a Lipschitz loss function satisfying a Bernstein condition over F. We explore both the bounded and subgaussian stochastic frameworks for the distribution of the f(X_i)'s, with ...
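
As a concrete, hypothetical instance of the displayed program, the sketch below takes F to be linear functions f(x) = xᵀw, ℓ the Huber loss (which is Lipschitz) and ‖·‖ the ℓ1 norm; the function name and parameter values are illustrative, not taken from the cited work.

```python
# Regularized empirical risk minimization with a Lipschitz loss and a norm penalty:
#   f_hat in argmin_f  (1/N) * sum_i loss(f(X_i), Y_i) + lam * ||f||
# Illustrative sketch restricted to linear predictors.
import numpy as np
import cvxpy as cp

def regularized_erm(X, y, lam, delta=1.0):  # hypothetical helper name
    N, d = X.shape
    w = cp.Variable(d)
    # Huber loss: Lipschitz (with constant 2 * delta) in its first argument.
    empirical_risk = cp.sum(cp.huber(X @ w - y, delta)) / N
    penalty = lam * cp.norm(w, 1)           # any norm could be used here
    cp.Problem(cp.Minimize(empirical_risk + penalty)).solve()
    return w.value
```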


Regularization of Inverse Problems with Unknown Operator

In this paper, we study statistical inverse problems. We are interested in the case where the operator is not exactly known. Using the penalized blockwise Stein's rule, we construct an estimator that produces sharp asymptotic oracle inequalities in different settings. In particular, we consider the case where the set of bases is not associated with the singular value decomposition. The represe...


Oracle Inequalities for High-dimensional Prediction

The abundance of high-dimensional data in the modern sciences has generated tremendous interest in penalized estimators such as the lasso, scaled lasso, square-root lasso, elastic net, and many others. However, the common theoretical bounds for the predictive performance of these estimators hinge on strong, in practice unverifiable assumptions on the design. In this paper, we introduce a new se...


Oracle inequalities for cross-validation type procedures

We prove oracle inequalities for three different types of adaptation procedures inspired by cross-validation and aggregation. These procedures are then applied to the construction of Lasso estimators and aggregation with exponential weights with data-driven regularization and temperature parameters, respectively. We also prove oracle inequalities for the cross-validation procedure itself...



Journal:
  • Journal of Machine Learning Research

Volume 18, Issue -

Pages -

Publication year 2017